How the Financial Industry Can Apply AI Responsibly

#artificialintelligence

THE INSTITUTE Artificial intelligence is transforming the financial services industry. The technology is being used to determine creditworthiness, identify money laundering, and detect fraud. AI is also helping to personalize services and recommend new offerings by developing a better understanding of customers. Chatbots and other AI assistants have made it easier for clients to get answers to their questions, 24/7. Although confidence in financial institutions is high, according to the Banking Exchange, that's not the case with AI.


AI development must be guided by ethics, human wellbeing and responsible innovation

#artificialintelligence

The topic of ethics and artificial intelligence is not new, but businesses and policy creators should prioritize human wellbeing and environmental flourishing – also known as societal value – in the discussion, says John C. Havens, director of emerging technology and strategic development at the IEEE Standards Association. Typically, ethical concerns tied to AI largely focus on risk, harm and responsibility; bias against race and gender; unintended consequences; and cybersecurity and hackers. These are important concerns, but Havens contends that as AI systems are created, they must directly address human-centric, values-driven issues as key performance indicators of success to build trust with end users. Havens further says that AI systems must also prioritize human wellbeing (specifically, aspects of caregiving, mental health and physiological needs not currently included in the GDP) and environmental flourishing as the ultimate metrics of success for society along with fiscal prosperity. Healthcare IT News sat down with Havens, author of "Heartificial Intelligence: Embracing Humanity to Maximize Machines," to discuss these and other important issues surrounding AI and ethics.


Responsible AI

Communications of the ACM

The high expectations of AI have triggered worldwide interest and concern, generating 400 policy documents on responsible AI. Intense discussions over the ethical issues lay a helpful foundation, preparing researchers, managers, policy makers, and educators for constructive discussions that will lead to clear recommendations for building the reliable, safe, and trustworthy systems that will be commercially successful. This Viewpoint focuses on four themes that lead to 15 recommendations for moving forward. The four themes combine AI thinking with human-centered User Experience Design (UXD). Ethical discussions are a vital foundation, but raising the edifice of responsible AI requires design decisions to guide software engineering teams, business managers, industry leaders, and government policymakers.


The State of AI Ethics Report (October 2020)

Gupta, Abhishek, Royer, Alexandrine, Heath, Victoria, Wright, Connor, Lanteigne, Camylle, Cohen, Allison, Ganapini, Marianna Bergamaschi, Fancy, Muriam, Galinkin, Erick, Khurana, Ryan, Akif, Mo, Butalid, Renjie, Khan, Falaah Arif, Sweidan, Masa, Balogh, Audrey

arXiv.org Artificial Intelligence

The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU's AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), Brent Barron (Director of Strategic Projects and Knowledge Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of Management), and Katya Klinova (AI and Economy Program Lead, Partnership on AI). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.


IEEE Initiative on Ethical Design Is Making Headway - AI Trends

#artificialintelligence

A three-year effort by hundreds of engineers worldwide culminated in the March 2019 publication of Ethically Aligned Design (EAD) for Business, a guide for policymakers, engineers, designers, developers and corporations. The effort was headed by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS), with John C. Havens as Executive Director, who spoke to AI Trends for an Executive Interview. We recently connected to ask how the effort has been going. EAD First Edition, a 290-page document which Havens refers to as "applied ethics," has seen some uptake, for example by IBM, which referred to the IEEE effort within its own resource called Everyday Ethics for AI. The IBM document is 26 pages, easy to digest, and structured into five areas of focus, each with recommended action steps and an example. The example for Accountability involved an AI team developing applications for a hotel.


School of Law Board of Governors' Eileen Lach - Trailblazer, Thought Leader, Giver St. Thomas Newsroom

#artificialintelligence

Throughout her life, Eileen Lach has been a leader. Growing up in northeast Minneapolis, she knew at a young age she wanted to move to New York City and travel the world. Lach accomplished that and more, carving out a position on Wall Street early in her career and later serving as the first general counsel and chief compliance officer for The Institute of Electrical and Electronics Engineers (IEEE). She is considered a thought leader, especially in the area of ethics and artificial intelligence (AI). During a conversation with Lach last summer, it was hard not to be wowed by her achievements and admire her dedication to philanthropic causes.


The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."


Artificial Intelligence: the global landscape of ethics guidelines

Jobin, Anna, Ienca, Marcello, Vayena, Effy

arXiv.org Artificial Intelligence

In the last five years, private companies, research institutions as well as public sector organisations have issued principles and guidelines for ethical AI, yet there is debate about both what constitutes "ethical AI" and which ethical requirements, technical standards and best practices are needed for its realization. To investigate whether a global agreement on these questions is emerging, we mapped and analyzed the current corpus of principles and guidelines on ethical AI. Our results reveal a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy), with substantive divergence in relation to how these principles are interpreted; why they are deemed important; what issue, domain or actors they pertain to; and how they should be implemented. Our findings highlight the importance of integrating guideline-development efforts with substantive ethical analysis and adequate implementation strategies.


Podcast #31: Ethically Aligned Design in Autonomous Systems with John C. Havens

#artificialintelligence

One might easily say about the notion of the ethics of disruptive technology, much like Mark Twain's misattributed missive about the weather, that "everybody talks about it, but nobody does anything." But IEEE, the Institute of Electrical and Electronics Engineers, is doing something. Freshly minted from its Global Initiative on Ethics of Autonomous and Intelligent Systems is the 290-page first edition of Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems. If that title sounds like a mouthful, it ought to. The issues that need to be addressed, to prevent the summoning of the demon that Elon Musk warns of, are complex.


An updated round up of ethical principles of robotics and AI

Robohub

This blogpost is an updated round up of the various sets of ethical principles of robotics and AI that have been proposed to date, ordered by date of first publication. I previously listed principles published before December 2017 here; this blogpost appends those principles drafted since January 2018 (plus one from October 2017 I had missed). The principles are listed here (in full or abridged) with links, notes and references but without critique. If there are any (prominent) ones I've missed please let me know. I have included these to explicitly acknowledge, firstly, that Asimov undoubtedly established the principle that robots (and by extension AIs) should be governed by principles, and secondly that many subsequent principles have been drafted as a direct response.